This paper presents a subsampling-task paradigm for data-driven, task-specific experiment design (ED) and a novel method for population-wide supervised feature selection (FS). Optimal ED, the choice of sampling points under constraints of limited acquisition time, arises in a wide variety of scientific and engineering contexts. However, the continuous optimization used in classical approaches depends on a priori parameter choices and must contend with challenging non-convex optimization landscapes. This paper proposes to replace this strategy with a subsampling-task paradigm, analogous to population-wide supervised FS. In particular, we introduce JOFSTO, which performs JOint Feature Selection and Task Optimization. JOFSTO jointly optimizes two coupled networks: one for feature scoring, which provides the ED; the other for execution of a downstream task or process. Unlike most FS problems, e.g. selecting protein expressions for classification, ED problems typically select from highly correlated, globally informative candidates rather than seeking a small number of highly informative features among many uninformative ones. JOFSTO's construction efficiently identifies potentially correlated but effective subsets and returns a trained task network. We demonstrate the approach on parameter estimation and mapping problems in clinically relevant applications in quantitative MRI and in hyperspectral imaging. Results from simulations and empirical data show that the subsampling-task paradigm strongly outperforms classical ED and that, within our paradigm, JOFSTO outperforms state-of-the-art supervised FS techniques. JOFSTO extends immediately to wider image-based ED problems and other scenarios where the design must be specified globally across large numbers of acquisitions. Code will be released.
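As a loose illustration of the coupled-network idea (and not the paper's actual JOFSTO architecture, whose scoring mechanism, subset-selection scheme, and losses differ in detail), a joint feature-scoring and task network might be sketched as follows in PyTorch, assuming a soft score vector and a simple cardinality penalty:

```python
# A minimal sketch of the coupled "feature scoring + task" idea, under assumed details;
# the score relaxation, penalty, and network sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

class ScoringNet(nn.Module):
    """Scores each candidate acquisition; the highest-scoring subset defines the ED."""
    def __init__(self, n_candidates: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_candidates))

    def forward(self) -> torch.Tensor:
        return torch.sigmoid(self.logits)            # soft inclusion weights in [0, 1]

class TaskNet(nn.Module):
    """Maps (score-weighted) measurements to downstream targets, e.g. tissue parameters."""
    def __init__(self, n_candidates: int, n_targets: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_candidates, 64), nn.ReLU(),
                                 nn.Linear(64, n_targets))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x)

def joint_loss(scores, x, y, task_net, budget: int, lam: float = 1.0):
    """Task loss on score-weighted inputs plus a penalty steering the scores towards `budget` features."""
    pred = task_net(x * scores)                      # coupling: the task network sees scored measurements
    return nn.functional.mse_loss(pred, y) + lam * (scores.sum() - budget) ** 2

scorer, task = ScoringNet(32), TaskNet(32, 2)
x, y = torch.randn(16, 32), torch.randn(16, 2)
joint_loss(scorer(), x, y, task, budget=8).backward()   # gradients reach both networks jointly
```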
Error correction techniques have been used to refine the output sentences from automatic speech recognition (ASR) models and achieve a lower word error rate (WER) than the original ASR outputs. Previous works usually use a sequence-to-sequence model to correct an ASR output sentence autoregressively, which incurs large latency and cannot be deployed in online ASR services. A straightforward solution to reduce latency, inspired by non-autoregressive (NAR) neural machine translation, is to use an NAR sequence generation model for ASR error correction, which, however, comes at the cost of a significantly increased error rate. In this paper, observing the distinctive error patterns and correction operations (i.e., insertion, deletion, and substitution) in ASR, we propose FastCorrect, a novel NAR error correction model based on edit alignment. In training, FastCorrect aligns each source token from an ASR output sentence to the target tokens from the corresponding ground-truth sentence based on the edit distance between the source and target sentences, and extracts the number of target tokens corresponding to each source token during editing/correction; these counts are then used to train a length predictor and to adjust the source tokens to match the length of the target sentence for parallel generation. In inference, the token numbers predicted by the length predictor are used to adjust the source tokens for target sequence generation. Experiments on the public AISHELL-1 dataset and an internal industrial-scale ASR dataset show the effectiveness of FastCorrect for ASR error correction: 1) it speeds up inference by 6-9 times while maintaining accuracy (8-14% WER reduction) compared with the autoregressive correction model; and 2) it outperforms the popular NAR models adopted in neural machine translation and text editing by a large margin.
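The edit-alignment signal that drives the length predictor can be illustrated with a small Levenshtein-style routine that assigns to each source token the number of target tokens it should expand to; the tie-breaking and the rule that credits insertions to a neighbouring source token below are simplifying assumptions, not FastCorrect's exact alignment algorithm:

```python
# Rough sketch: for each ASR (source) token, count how many ground-truth (target) tokens it
# should produce (0 = delete, 1 = keep/substitute, >1 = insertions attached to it).
def duplication_counts(src, tgt):
    m, n = len(src), len(tgt)
    # dp[i][j] = edit distance between src[:i] and tgt[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete a source token
                           dp[i][j - 1] + 1,        # insert a target token
                           dp[i - 1][j - 1] + cost) # keep / substitute
    counts = [0] * m
    i, j = m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (0 if src[i - 1] == tgt[j - 1] else 1):
            counts[i - 1] += 1          # src[i-1] aligns to tgt[j-1]
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            counts[max(i - 1, 0)] += 1  # inserted target token credited to a neighbouring source token
            j -= 1
        else:
            i -= 1                      # deleted source token: contributes 0 target tokens
    return counts

# e.g. duplication_counts(list("abd"), list("abcd")) -> [1, 2, 1]  (one insertion after 'b')
```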
Patient triage at emergency departments (EDs) is necessary to prioritize care for patients with critical and time-sensitive conditions. Different tools are used for patient triage, and one of the most common is the emergency severity index (ESI), a five-level scale where level 1 is the most urgent and level 5 is the least urgent. This paper proposes a framework for utilizing machine learning to develop an e-triage tool that can be used at EDs. A large retrospective dataset of ED patient visits, covering three years, is obtained from the electronic health record of a healthcare provider in the Midwest of the US. However, a main challenge of using machine learning algorithms is that most of them have many hyperparameters, and without optimizing these parameters, developing a high-performance model is not possible. This paper proposes an approach to optimize the hyperparameters of the machine learning models. The metaheuristic optimization algorithms simulated annealing (SA) and adaptive simulated annealing (ASA) are proposed to optimize the parameters of extreme gradient boosting (XGB) and categorical boosting (CaB), yielding the algorithms SA-XGB, ASA-XGB, SA-CaB, and ASA-CaB. Grid search (GS), a traditional approach to machine learning fine-tuning, is also used to tune the parameters of XGB and CaB, giving GS-XGB and GS-CaB. The six algorithms are trained and tested using eight data groups obtained from the feature selection phase. The results show that ASA-CaB outperformed all the other algorithms, with accuracy, precision, recall, and F1 of 83.3%, 83.2%, 83.3%, and 83.2%, respectively.
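To make the SA-based tuning concrete, here is a minimal simulated-annealing loop around XGBoost in the spirit of the SA-XGB variant; the search space, cooling schedule, scoring metric, and synthetic data are assumptions for the sketch, not the paper's configuration or patient data:

```python
# Illustrative simulated-annealing hyperparameter search for XGBoost.
import math
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # stand-in for triage data

def score(params):
    """Cross-validated accuracy for a given hyperparameter setting."""
    return cross_val_score(XGBClassifier(**params), X, y, cv=3, scoring="accuracy").mean()

def neighbour(params):
    """Random local perturbation of the current hyperparameters."""
    p = dict(params)
    p["max_depth"] = int(np.clip(p["max_depth"] + random.choice([-1, 1]), 2, 12))
    p["learning_rate"] = float(np.clip(p["learning_rate"] * random.uniform(0.7, 1.3), 0.01, 0.5))
    p["n_estimators"] = int(np.clip(p["n_estimators"] + random.choice([-50, 50]), 50, 500))
    return p

current = {"max_depth": 6, "learning_rate": 0.1, "n_estimators": 200}
cur_score = score(current)
best, best_score, T = dict(current), cur_score, 1.0
for _ in range(30):                               # small evaluation budget for the sketch
    cand = neighbour(current)
    cand_score = score(cand)
    # always accept improvements; accept worse moves with temperature-dependent probability
    if cand_score >= cur_score or random.random() < math.exp((cand_score - cur_score) / T):
        current, cur_score = cand, cand_score
        if cur_score > best_score:
            best, best_score = dict(current), cur_score
    T *= 0.9                                      # geometric cooling
print(best, round(best_score, 3))
```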
Event Detection (ED) is the task of identifying and classifying trigger words of event mentions in text. Despite considerable research efforts in recent years for English text, the task of ED in other languages has been significantly less explored. Moving to non-English languages, important research questions for ED include how well existing ED models perform on different languages, how challenging ED is in other languages, and how well ED knowledge and annotation can be transferred across languages. To answer these questions, it is crucial to obtain multilingual ED datasets that provide consistent event annotation across multiple languages. Some multilingual ED datasets do exist; however, they tend to cover only a handful of languages, mainly popular ones, leaving many languages uncovered. In addition, the current datasets are often small and not publicly accessible. To overcome these shortcomings, we introduce a new large-scale multilingual dataset for ED (called MINION) that consistently annotates events for 8 different languages, 5 of which are not supported by existing multilingual datasets. We also perform extensive experiments and analysis to demonstrate the challenges and transferability of ED across languages in MINION, which altogether call for more research effort in this area.
Rates of missing data often depend on record-keeping policies and thus may change across times and locations, even when the underlying features are comparatively stable. In this paper, we introduce the problem of Domain Adaptation under Missingness Shift (DAMS). Here, (labeled) source data and (unlabeled) target data would be exchangeable but for different missing data mechanisms. We show that when missing data indicators are available, DAMS can reduce to covariate shift. Focusing on the setting where missing data indicators are absent, we establish the following theoretical results for underreporting completely at random: (i) covariate shift is violated (adaptation is required); (ii) the optimal source predictor can perform worse on the target domain than a constant one; (iii) the optimal target predictor can be identified, even when the missingness rates themselves are not; and (iv) for linear models, a simple analytic adjustment yields consistent estimates of the optimal target parameters. In experiments on synthetic and semi-synthetic data, we demonstrate the promise of our methods when assumptions hold. Finally, we discuss a rich family of future extensions.
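A small simulation conveys the core setting (an illustrative sketch only, not the paper's experiments or its analytic adjustment): the clean data distribution is shared, but values are underreported at different rates in source and target, so the source-fit linear predictor degrades on the target relative to a target-fit model:

```python
# Toy DAMS-style simulation: same clean (x, y) distribution, different zeroing rates,
# missing values recorded as 0 with no missingness indicators.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, d = 50_000, 5
w_true = rng.normal(size=d)

def sample(miss_rate):
    x = rng.normal(size=(n, d))
    y = x @ w_true + 0.1 * rng.normal(size=n)
    mask = rng.random((n, d)) > miss_rate       # observed entries
    return np.where(mask, x, 0.0), y            # underreported entries appear as 0

Xs, ys = sample(miss_rate=0.2)                  # source: 20% underreporting
Xt, yt = sample(miss_rate=0.6)                  # target: 60% underreporting

src_model = LinearRegression().fit(Xs, ys)      # fit on the labeled source only
oracle = LinearRegression().fit(Xt, yt)         # what a target-optimal linear predictor achieves

print("source model on target, MSE:", np.mean((src_model.predict(Xt) - yt) ** 2))
print("target-optimal model,  MSE:", np.mean((oracle.predict(Xt) - yt) ** 2))
```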
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
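For reference, the released Flan-T5 checkpoints can be loaded directly through the Hugging Face transformers library; the checkpoint name and prompt below are just an illustrative usage example:

```python
# Minimal usage sketch for a publicly released Flan-T5 checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

prompt = "Answer the following question. What is the boiling point of water in Celsius?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)     # zero-shot instruction following
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```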
Visual reinforcement learning (RL), which makes decisions directly from high-dimensional visual inputs, has demonstrated significant potential in various domains. However, deploying visual RL techniques in the real world remains challenging due to their low sample efficiency and large generalization gaps. To tackle these obstacles, data augmentation (DA) has become a widely used technique in visual RL for acquiring sample-efficient and generalizable policies by diversifying the training data. This survey aims to provide a timely and essential review of DA techniques in visual RL in recognition of the thriving development in this field. In particular, we propose a unified framework for analyzing visual RL and understanding the role of DA in it. We then present a principled taxonomy of the existing augmentation techniques used in visual RL and conduct an in-depth discussion on how to better leverage augmented data in different scenarios. Moreover, we report a systematic empirical evaluation of DA-based techniques in visual RL and conclude by highlighting the directions for future research. As the first comprehensive survey of DA in visual RL, this work is expected to offer valuable guidance to this emerging field.
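As a concrete example of the kind of augmentation the survey covers, a random pad-and-crop shift of image observations, widely used in visual RL, can be sketched in a few lines of NumPy; the padding size and observation shape here are arbitrary illustrative choices:

```python
# Random shift augmentation for image observations (pad with edge pixels, then crop back).
import numpy as np

def random_shift(obs: np.ndarray, pad: int = 4) -> np.ndarray:
    """obs: (C, H, W) image observation; returns a randomly shifted copy of the same size."""
    c, h, w = obs.shape
    padded = np.pad(obs, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    top = np.random.randint(0, 2 * pad + 1)
    left = np.random.randint(0, 2 * pad + 1)
    return padded[:, top:top + h, left:left + w]

aug_obs = random_shift(np.random.rand(3, 84, 84))   # e.g. an 84x84 RGB frame
```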
Cardiac magnetic resonance (CMR) sequences visualise cardiac function voxel-wise over time. Meanwhile, deep-learning-based deformable image registration can estimate, in a self-supervised manner, discrete vector fields that warp one time step of a CMR sequence to the following one. However, despite the rich source of information contained in these 3D+t vector fields, a standardised interpretation is challenging and clinical applications have so far remained limited. In this work, we show how deformable vector fields can be used efficiently to describe the underlying dynamic processes of the cardiac cycle in the form of a derived 1D motion descriptor. Furthermore, based on the expected cardiophysiological properties of a contracting or relaxing ventricle, we define a set of rules that enables the identification of five cardiovascular phases, including end-systole (ES) and end-diastole (ED), without the use of labels. We evaluate the plausibility of the motion descriptor on two challenging multi-disease, multi-centre, multi-scanner short-axis CMR datasets: first, by reporting quantitative measures such as the periodic frame difference of the extracted phases; second, by qualitatively comparing the general pattern when we temporally re-sample and align the motion descriptors of all instances of both datasets. The average periodic frame difference for the ED and ES key phases of our approach is $0.80 \pm 0.85$ and $0.69 \pm 0.79$, respectively, which is slightly better than the inter-observer variability ($1.07 \pm 0.86$, $0.91 \pm 1.6$) and a supervised baseline method ($1.18 \pm 1.91$, $1.21 \pm 1.78$). Code and labels will be made available in our GitHub repository: https://github.com/cardio-ai/cmr-phase-detection
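One way to picture the reduction from a 3D+t deformation field to a 1D motion descriptor (purely illustrative; the paper's descriptor and its rule set for the five phases differ in detail) is to average the per-voxel displacement magnitude over space for each time step and then read candidate key frames off the resulting curve:

```python
# Hedged sketch: 3D+t vector fields -> 1D motion curve -> candidate low-motion key frames.
import numpy as np

def motion_descriptor(flow: np.ndarray) -> np.ndarray:
    """flow: (T, 3, Z, Y, X) vector fields warping frame t to t+1 -> (T,) 1D curve."""
    magnitude = np.linalg.norm(flow, axis=1)          # (T, Z, Y, X) per-voxel motion strength
    return magnitude.mean(axis=(1, 2, 3))             # spatial average per time step

def key_frames(curve: np.ndarray) -> dict:
    # Strong motion accompanies contraction/relaxation, while ED/ES sit near motion minima;
    # here we simply take the two lowest-motion frames as candidates.
    order = np.argsort(curve)
    return {"candidate_ED": int(order[0]), "candidate_ES": int(order[1])}

flow = np.random.rand(30, 3, 16, 128, 128)            # toy 3D+t field, shapes for illustration only
print(key_frames(motion_descriptor(flow)))
```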
The clearing of electricity markets is usually implemented through an open-loop predict-then-optimize (O-PO) process: it first forecasts the available power of renewable energy sources (RES) and the system reserve requirements; then, given the forecasts, it clears the market through optimization models, i.e., unit commitment (UC) and economic dispatch (ED), to pursue the best electricity market economics. However, market economics may suffer under this open-loop process, because its forecasts can be myopic with respect to the optimization, i.e., the forecasts aim to reduce immediate statistical forecasting errors rather than to improve the ultimate market economics. To this end, this paper proposes a closed-loop predict-and-optimize (C-PO) framework based on tri-level mixed-integer programming, which trains economics-oriented forecasters tailored to the market-clearing optimization in order to improve the ultimate market economics. Specifically, the upper level trains the economics-oriented RES and reserve forecasters according to the market economics they induce, while the middle and lower levels, given the forecasts, mimic the market-clearing process and feed the induced market economics back to the upper level. The trained economics-oriented forecasters are then embedded into the UC model, forming a prescriptive UC model that can simultaneously provide RES-reserve forecasts and UC decisions with enhanced market economics. Numerical case studies on the IEEE 118-bus system illustrate the potential economic and practical advantages of C-PO over O-PO, robust UC, and stochastic UC.
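The contrast between open-loop and closed-loop forecasting can be conveyed with a toy single-bus example (all numbers invented for illustration; the paper's actual model is a tri-level mixed-integer program evaluated on the IEEE 118-bus system): the forecast that minimizes the induced dispatch-plus-imbalance cost is not the forecast that minimizes statistical error.

```python
# Toy sketch: judge a RES forecast by the market cost it induces, not its statistical error.
import numpy as np

rng = np.random.default_rng(0)
demand, cheap_cap = 100.0, 60.0                      # MW
c_cheap, c_peak, c_shortfall = 20.0, 60.0, 300.0     # $/MWh; real-time shortfall is penalized hard
res_scenarios = rng.uniform(10, 50, size=10_000)     # possible renewable realizations (MW)

def induced_cost(forecast, actual):
    """Day-ahead dispatch cost given a RES forecast, plus real-time shortfall penalty."""
    scheduled = demand - forecast                    # conventional energy cleared against the forecast
    cheap = min(scheduled, cheap_cap)
    peak = scheduled - cheap
    shortfall = np.maximum((demand - actual) - scheduled, 0.0)
    return c_cheap * cheap + c_peak * peak + c_shortfall * shortfall

o_po = res_scenarios.mean()                          # open-loop: statistically best point forecast
grid = np.linspace(10, 50, 401)
c_po = grid[int(np.argmin([induced_cost(f, res_scenarios).mean() for f in grid]))]

print(f"O-PO forecast {o_po:5.1f} MW -> expected cost {induced_cost(o_po, res_scenarios).mean():8.0f} $")
print(f"C-PO forecast {c_po:5.1f} MW -> expected cost {induced_cost(c_po, res_scenarios).mean():8.0f} $")
```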
Event detection (ED), which aims to detect events from text and classify them, is vital for understanding real-life situations. However, mainstream event detection models require high-quality expert human annotation of triggers, which is often costly and thus hinders the application of ED to new domains. Therefore, in this paper we focus on low-resource ED without triggers and aim to tackle the following formidable challenges: multi-label classification, insufficient clues, and imbalanced event distributions. We propose a novel trigger-free ED method via a Derangement Reading Comprehension (DRC) framework. More specifically, we treat the input text as the context and concatenate it with tokens for all event types, which are treated as the answers, while omitting the default question. We can thereby leverage the self-attention of a pre-trained language model to absorb the semantic relations between the input text and the event types. Moreover, we design a simple yet effective event derangement module (EDM) to prevent major events from being over-learned, yielding a more balanced training process. Experimental results show that our proposed trigger-free ED model is highly competitive with mainstream trigger-based models, demonstrating its strong performance on low-resource event detection.
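A rough sketch of the reading-comprehension-style input format (the pooling and scoring head below are simplified assumptions, not the paper's architecture): the sentence serves as the context and all event-type names are appended so a pre-trained encoder can relate the text to every type in a single pass, producing independent multi-label scores.

```python
# Context + event-type "answers" encoded as one text pair; multi-label scores per event type.
import torch
from transformers import AutoModel, AutoTokenizer

event_types = ["Attack", "Transport", "Meet", "Elect"]          # illustrative label set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

text = "Protesters clashed with police in the capital on Friday."
pair = tokenizer(text, " ".join(event_types), return_tensors="pt")  # context [SEP] event types
hidden = encoder(**pair).last_hidden_state                          # (1, seq_len, hidden)

# One score per event type over a pooled representation -> independent (multi-label) probabilities.
scorer = torch.nn.Linear(encoder.config.hidden_size, len(event_types))
logits = scorer(hidden[:, 0])                                       # pool via [CLS] for simplicity
probs = torch.sigmoid(logits)
print(dict(zip(event_types, probs.squeeze(0).tolist())))
```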